CASCADE ERROR PROJECTION LEARNING THEORY

Authors

  • Tuan A. Duong
  • Allen R. Stubberud
  • Taher Daud
Abstract

Tuan A. Duong*+, Allen R. Stubberud+, and Taher Daud*
* Center for Space Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
+ Department of Electrical Engineering, University of California, Irvine, Irvine, CA 92717

Abstract: The cascade correlation based neural network learning algorithm has drawn a lot of attention because of its enhanced learning capability. It overcomes significant drawbacks of error backpropagation (EBP) in that (1) it is no longer constrained to a fixed architecture via a preallocation of the number of hidden units, and (2) it features selective weight training as opposed to EBP's global weight training. In addition, from a hardware implementation perspective, networks based on the cascade correlation algorithm require significantly less complex synaptic weight circuitry than those required by EBP. We present a mathematical analysis for a new scheme termed Cascade Error Projection (CEP) and show that the same is also applicable to cascade correlation. In CEP, it is shown that there exists, at least, a non-zero set of weights which is calculated from affine space, and that convergence is assured because the network satisfies the Liapunov criteria in the added-hidden-unit domain rather than in the time domain. The CEP technique is faster to execute because part of the weights are deterministically obtained, and the learning of weights from the inputs to each added hidden unit is performed as single-layer perceptron learning with previously obtained weights frozen. In addition, the initial weights start out with a zero value for every newly added unit, and a single hidden unit is applied instead of using a pool of candidate hidden units as for cascade correlation, thereby...
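The cascading procedure the abstract describes — freeze all previously learned weights, train each new hidden unit's input weights from zero as a single-layer perceptron, then obtain its output weight deterministically — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the correlation-style update and the least-squares output weight are our assumptions, and all function and variable names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_cep(X, T, n_hidden=5, epochs=200, lr=0.1):
    """Illustrative sketch of Cascade Error Projection (CEP) training.

    X: (N, d) inputs, T: (N, m) targets. Hidden units are added one at a
    time; once a unit is installed, its weights are frozen.
    """
    N, d = X.shape
    m = T.shape[1]
    Xb = np.hstack([X, np.ones((N, 1))])      # inputs with a bias column
    W_out = np.zeros((d + 1, m))              # direct input-to-output weights
    hidden = []                               # list of (w_in, w_out) per added unit

    def forward(Xb):
        feats = Xb
        out = feats @ W_out
        for w_in, w_o in hidden:
            h = sigmoid(feats @ w_in)[:, None]  # each unit sees all prior features
            out = out + h @ w_o[None, :]
            feats = np.hstack([feats, h])       # cascade: unit output becomes a feature
        return feats, out

    for _ in range(n_hidden):
        feats, out = forward(Xb)
        E = T - out                           # residual error the new unit projects onto
        w_in = np.zeros(feats.shape[1])       # CEP starts each new unit at zero weights
        for _ in range(epochs):
            h = sigmoid(feats @ w_in)
            # gradient ascent on the correlation between the unit's output
            # and the residual error (prior weights stay frozen)
            g = feats.T @ (E.sum(axis=1) * h * (1.0 - h))
            w_in += lr * g / N
        h = sigmoid(feats @ w_in)
        # output weight obtained deterministically: least squares on the residual
        w_o = (h @ E) / (h @ h + 1e-12)
        hidden.append((w_in, w_o))
    return forward(Xb)[1]
```

Because each output weight is the least-squares fit to the current residual, adding a unit can only reduce the squared error, which mirrors the convergence-in-the-added-hidden-unit-domain argument in the abstract.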


Similar articles

- Cascade Error Projection Learning Algorithm

In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm is operated only on one layer, whereas the other set of weights can be calc...


Convergence Analysis of Cascade Error Projection - An Efficient Learning Algorithm for Hardware Implementation

In this paper, we present a mathematical foundation, including a convergence analysis, for the cascade architecture neural network. Our analysis also shows that convergence of the cascade architecture neural network is assured because it satisfies the Liapunov criteria, in an added hidden unit domain rather than in the time domain. From this analysis, a mathematical foundation for the cascade cor...


Cascade Error Projection: A Learning Algorithm for Hardware Implementation

In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm is operated only on one layer, whereas the other set of weights can be calc...


Cascade error projection with low bit weight quantization for high order correlation data

In this paper, we reinvestigate the solution of the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences are never repeated, but rather lie in a chaotic region. However, these data sequences are correlated among past, present, and future data in high order. We use the Cascade Error Projection (CEP) learning algorithm to c...


Revisiting the Nyström method for improved large-scale machine learning

We reconsider randomized algorithms for the low-rank approximation of symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel matrices that arise in data analysis and machine learning applications. Our main results consist of an empirical evaluation of the performance quality and running time of sampling and projection methods on a diverse suite of SPSD matrices. Our resul...
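The Nyström approximation this snippet evaluates can be sketched in a few lines: for an SPSD matrix K, sample a column subset C and the corresponding principal submatrix W, and form K ≈ C W⁺ Cᵀ. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def nystrom_approx(K, idx):
    """Nystrom low-rank approximation of an SPSD matrix K.

    K: (n, n) symmetric positive semi-definite matrix.
    idx: indices of the sampled columns.
    Returns K_approx = C @ pinv(W) @ C.T, where C = K[:, idx]
    and W = K[idx, idx] is the sampled principal submatrix.
    """
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```

The approximation is exact whenever the sampled submatrix W has the same rank as K, which is why column-sampling strategies matter for approximation quality.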



Journal title:

Volume   Issue

Pages  -

Publication date: 1996